Exploiting Data Sparsity in Parallel Matrix Powers Computations
Authors
Abstract
We derive a new parallel communication-avoiding matrix powers algorithm for matrices of the form A = D + USV^H, where D is sparse and USV^H has low rank and is possibly dense. We demonstrate that, with respect to the cost of computing k sparse matrix-vector multiplications, our algorithm asymptotically reduces the parallel latency by a factor of O(k) for small additional bandwidth and computation costs. Using problems from real-world applications, our performance model predicts up to 13× speedups on petascale machines.
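As a rough illustration of the structure the algorithm exploits (a sketch only, not the communication-avoiding algorithm itself), the vectors x, Ax, ..., A^k x can be generated without ever forming A explicitly, by applying the sparse part D and the low-rank factors U, S, V^H separately. The Python/NumPy code and variable names below are assumptions for illustration.

import numpy as np
import scipy.sparse as sp

def matrix_powers_split(D, U, S, V, x, k):
    # Return [x, A x, ..., A^k x] for A = D + U S V^H without ever forming A.
    vecs = [x]
    for _ in range(k):
        y = vecs[-1]
        # Sparse part and low-rank part applied separately:
        # A y = D y + U (S (V^H y)); the low-rank product costs O(n r), not O(n^2).
        vecs.append(D @ y + U @ (S @ (V.conj().T @ y)))
    return np.column_stack(vecs)

# Illustrative usage with random data.
n, r, k = 1000, 5, 10
D = sp.random(n, n, density=1e-3, format="csr") + sp.eye(n)
U = np.random.rand(n, r); V = np.random.rand(n, r); S = np.random.rand(r, r)
x = np.random.rand(n)
K = matrix_powers_split(D, U, S, V, x, k)   # n x (k+1) basis of powers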
Similar resources
Time Complexity and Parallel Speedup to Compute the Gamma Summarization Matrix
We study the serial and parallel computation of Γ (Gamma), a comprehensive data summarization matrix for linear Gaussian models, widely used in big data analytics. Computing Gamma can be reduced to a single matrix multiplication with the data set, where such multiplication can be evaluated as a sum of vector outer products, which enables incremental and parallel computation, essential features ...
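A minimal sketch of the property described above, assuming the common convention that each data point is augmented to z_i = [1, x_i, y_i] and Γ = Σ_i z_i z_i^T = Z^T Z (the exact layout of Γ in the paper may differ); the sum-of-outer-products form is what makes the computation incremental.

import numpy as np

def gamma_incremental(X, y):
    # Accumulate Gamma = sum_i z_i z_i^T with z_i = [1, x_i, y_i] (assumed convention).
    n, d = X.shape
    G = np.zeros((d + 2, d + 2))
    for i in range(n):                      # one pass over the data set
        z = np.concatenate(([1.0], X[i], [y[i]]))
        G += np.outer(z, z)                 # rank-1 (outer product) update per point
    return G

def gamma_matmul(X, y):
    # The same Gamma obtained as a single matrix multiplication Z^T Z.
    Z = np.hstack([np.ones((X.shape[0], 1)), X, y.reshape(-1, 1)])
    return Z.T @ Z

X = np.random.rand(500, 3); y = np.random.rand(500)
assert np.allclose(gamma_incremental(X, y), gamma_matmul(X, y))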
Incremental Computation of Linear Machine Learning Models in Parallel Database Systems
We study the serial and parallel computation of Γ (Gamma), a comprehensive data summarization matrix for linear machine learning models widely used in big data analytics. We prove that computing Gamma can be reduced to a single matrix multiplication with the data set, where such multiplication can be evaluated as a sum of vector outer products, which enables incremental and parallel computation...
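Because Γ is a sum over data points, disjoint partitions of the data set can be summarized independently and the partial results merged by matrix addition, which is the parallel property referred to above. A small sketch, with partitions standing in for workers or database nodes (names and layout assumed for illustration):

import numpy as np

def partial_gamma(X_part, y_part):
    # Gamma of one data partition: Z^T Z with rows z_i = [1, x_i, y_i].
    Z = np.hstack([np.ones((X_part.shape[0], 1)), X_part, y_part.reshape(-1, 1)])
    return Z.T @ Z

X = np.random.rand(1000, 4); y = np.random.rand(1000)
parts = np.array_split(np.arange(X.shape[0]), 4)             # 4 disjoint partitions
partials = [partial_gamma(X[idx], y[idx]) for idx in parts]  # independent work items
Gamma = sum(partials)                                        # cheap merge by addition
assert np.allclose(Gamma, partial_gamma(X, y))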
Designing Parallel Sparse Matrix Algorithms beyond Data Dependence Analysis
Algorithms are often parallelized based on data dependence analysis manually or by means of parallel compilers. Some vector/matrix computations such as the matrix-vector products with simple data dependence structures (data parallelism) can be easily parallelized. For problems with more complicated data dependence structures, parallelization is less straightforward. The data dependence graph is...
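As a concrete instance of the "easily parallelized" case mentioned above, a sparse matrix-vector product split into row blocks has fully independent per-block work. The sketch below (not the paper's method) only shows that data-parallel structure; each block could be assigned to a separate thread or process.

import numpy as np
import scipy.sparse as sp

def spmv_by_row_blocks(A_csr, x, num_blocks=4):
    # y = A x computed block of rows by block of rows; blocks are independent tasks.
    n = A_csr.shape[0]
    y = np.empty(n)
    for rows in np.array_split(np.arange(n), num_blocks):
        y[rows] = A_csr[rows, :] @ x         # no data dependence between blocks
    return y

A = sp.random(2000, 2000, density=1e-3, format="csr")
x = np.random.rand(2000)
assert np.allclose(spmv_by_row_blocks(A, x), A @ x)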
Fast and Scalable Parallel Algorithms for Matrix Chain Product and Matrix Powers on Reconfigurable Pipelined Optical Buses
Given N matrices A1, A2, ..., AN of size N × N, the matrix chain product problem is to compute A1 × A2 × ... × AN. Given an N × N matrix A, the matrix powers problem is to calculate the first N powers of A, i.e., A, A^2, A^3, ..., A^N. Both problems are important in conducting many matrix manipulations such as computing the characteristic polynomial, determinant, rank, and inverse of a matrix, and in ...
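For reference, the two problems can be stated as simple serial baselines (the cited paper's algorithms on reconfigurable pipelined optical buses are of course far more parallel than this); the helper names below are illustrative.

import numpy as np

def matrix_chain_product(mats):
    # A1 x A2 x ... x AN, multiplied left to right.
    out = mats[0]
    for M in mats[1:]:
        out = out @ M
    return out

def first_n_powers(A, N):
    # [A, A^2, ..., A^N] by repeated multiplication.
    powers = [A]
    for _ in range(N - 1):
        powers.append(powers[-1] @ A)
    return powers

A = np.random.rand(4, 4)
assert np.allclose(first_n_powers(A, 4)[3], np.linalg.matrix_power(A, 4))
assert np.allclose(matrix_chain_product(first_n_powers(A, 3)), np.linalg.matrix_power(A, 6))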
Exploiting sparsity in linear and nonlinear matrix inequalities via positive semidefinite matrix completion
A basic framework for exploiting sparsity via positive semidefinite matrix completion is presented for an optimization problem with linear and nonlinear matrix inequalities. The sparsity, characterized with a chordal graph structure, can be detected in the variable matrix or in a linear or nonlinear matrix-inequality constraint of the problem. We classify the sparsity in two types, the...
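As a small illustration of the structure being exploited (not the completion framework itself), the aggregate sparsity pattern of a symmetric matrix can be viewed as a graph; when that graph is chordal, its maximal cliques give the blocks that positive semidefinite matrix completion methods work with. The networkx-based sketch below, with an assumed arrow-shaped example pattern, only checks chordality and lists the cliques.

import numpy as np
import networkx as nx

def sparsity_cliques(A):
    # Build the aggregate sparsity graph of a symmetric matrix and, if it is
    # chordal, return its maximal cliques.
    n = A.shape[0]
    G = nx.Graph()
    G.add_nodes_from(range(n))
    G.add_edges_from((i, j) for i in range(n) for j in range(i + 1, n) if A[i, j] != 0)
    if not nx.is_chordal(G):
        raise ValueError("pattern is not chordal; a chordal extension would be needed")
    return [sorted(c) for c in nx.chordal_graph_cliques(G)]

# Arrow-shaped pattern (dense first row/column plus the diagonal) is chordal.
n = 5
A = np.eye(n); A[0, :] = 1; A[:, 0] = 1
print(sparsity_cliques(A))   # cliques {0, i} for i = 1, ..., 4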
Journal:
Volume, Issue:
Pages: -
Publication date: 2013